Mahalanobis metric
Nonparametric Online Regression while Learning the Metric
Ilja Kuzborskij, Nicolò Cesa-Bianchi
We study algorithms for online nonparametric regression that learn the directions along which the regression function is smoother. Our algorithm learns the Mahalanobis metric based on the gradient outer product matrix $\boldsymbol{G}$ of the regression function (automatically adapting to the effective rank of this matrix), while simultaneously bounding the regret, on the same data sequence, in terms of the spectrum of $\boldsymbol{G}$. As a preliminary step in our analysis, we extend a nonparametric online learning algorithm by Hazan and Megiddo, enabling it to compete against functions whose Lipschitzness is measured with respect to an arbitrary Mahalanobis metric.
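To make the central object concrete: the gradient outer product matrix $\boldsymbol{G} = \mathbb{E}[\nabla f(X) \nabla f(X)^\top]$ is large along directions where the regression function $f$ varies quickly and small along directions where $f$ is smooth, so the Mahalanobis distance it induces discounts the smooth directions. The numpy sketch below is only an offline illustration of that construction; the toy function f, the sampling distribution, and the finite-difference gradient estimator are all made up for the example, and the paper's algorithm learns the metric online rather than in this batch fashion.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Toy regression function: varies strongly along x[0], weakly along x[1].
    return np.sin(3 * x[0]) + 0.1 * x[1]

def grad_f(x, eps=1e-5):
    # Central finite-difference estimate of the gradient of f at x.
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

# Estimate G = E[grad f(X) grad f(X)^T] by averaging over random samples.
X = rng.uniform(-1, 1, size=(500, 2))
G = np.mean([np.outer(grad_f(x), grad_f(x)) for x in X], axis=0)

def mahalanobis(a, b, M):
    # Mahalanobis distance induced by a PSD matrix M: directions where f is
    # smoother (small gradient) contribute less to the distance.
    d = a - b
    return float(np.sqrt(d @ M @ d))

u, v = np.array([0.0, 0.0]), np.array([0.5, 0.5])
print("G =\n", np.round(G, 3))
print("Mahalanobis distance:", mahalanobis(u, v, G))
print("Euclidean distance:  ", np.linalg.norm(u - v))
```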
Metric Learning for Temporal Sequence Alignment
Rémi Lajugie, Damien Garreau, Francis Bach, Sylvain Arlot
In this paper, we propose to learn a Mahalanobis distance to perform alignment of multivariate time series. The learning examples for this task are time series for which the true alignment is known. We cast the alignment problem as a structured prediction task, and propose realistic losses between alignments for which the optimization is tractable. We provide experiments on real data in the audio-to-audio context, where we show that learning a similarity measure leads to improvements in the performance of the alignment task. We also propose to use this metric learning framework to perform feature selection and, starting from basic audio features, to build a combination of them with better alignment performance.
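To illustrate the role of the metric in alignment: given a positive semidefinite matrix M, the cost of matching frame a_i of one series to frame b_j of another is the squared Mahalanobis distance (a_i - b_j)^T M (a_i - b_j), and an alignment can then be found by dynamic time warping over this cost matrix. The sketch below shows only this inference step with a fixed, assumed M (with M = I it reduces to ordinary DTW); it is not the paper's structured-prediction procedure for learning M.

```python
import numpy as np

def dtw_align(A, B, M):
    """Dynamic time warping between sequences A (n x d) and B (m x d), using
    the squared Mahalanobis distance induced by a PSD matrix M as the
    frame-to-frame cost. Returns the total cost and the warping path."""
    n, m = len(A), len(B)
    diff = A[:, None, :] - B[None, :, :]               # (n, m, d) pairwise differences
    cost = np.einsum('ijd,de,ije->ij', diff, M, diff)  # squared Mahalanobis costs
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = cost[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack the optimal warping path from (n, m) to (1, 1).
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 3))
B = rng.normal(size=(10, 3))
M = np.eye(3)  # identity metric recovers plain squared-Euclidean DTW
total, path = dtw_align(A, B, M)
print(total, path[:5])
```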
Simultaneous Preference and Metric Learning from Paired Comparisons
A popular model of preference in the context of recommendation systems is the so-called ideal point model. In this model, a user is represented as a vector u together with a collection of items x_1, ..., x_N in a common low-dimensional space. The vector u represents the user's "ideal point," or the ideal combination of features for a hypothesized most preferred item. The underlying assumption in this model is that a smaller distance between u and an item x_j indicates a stronger preference for x_j. In the vast majority of the existing work on learning ideal point models, the underlying distance has been assumed to be Euclidean. However, this eliminates any possibility of interactions between features and a user's underlying preferences.
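In the metric-learning version of the ideal point model, the user prefers item x_i to item x_j exactly when d_M(u, x_i) < d_M(u, x_j) for an unknown ideal point u and an unknown positive semidefinite matrix M, so every paired comparison yields one inequality constraint on (u, M). The sketch below only generates and inspects such comparisons from synthetic ground truth; all data here is made up, and this is not the paper's estimation algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
d, N = 3, 6
u = rng.normal(size=d)        # user's ideal point (unknown in practice)
X = rng.normal(size=(N, d))   # item embeddings x_1, ..., x_N
L = rng.normal(size=(d, d))
M = L @ L.T                   # a PSD Mahalanobis matrix (synthetic ground truth)

def dist_M(a, b, M):
    diff = a - b
    return float(np.sqrt(diff @ M @ diff))

def prefers(i, j):
    # Paired comparison: True means the user prefers item i over item j,
    # i.e. item i is closer to the ideal point under the metric M.
    return dist_M(u, X[i], M) < dist_M(u, X[j], M)

# Each observed comparison (i, j) encodes the inequality
#   (u - x_i)^T M (u - x_i) < (u - x_j)^T M (u - x_j),
# which expands to a constraint that is linear in M and in the product M u,
# so comparisons constrain the user vector and the metric jointly.
comparisons = [(i, j) if prefers(i, j) else (j, i)
               for i in range(N) for j in range(i + 1, N)]
print(comparisons)
```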
Reviews: Nonparametric Online Regression while Learning the Metric
This paper describes a novel algorithm for the online nonparametric regression problem. It employs a Mahalanobis metric to obtain a better distance measure within the traditional online nonparametric regression framework. In terms of theoretical analysis, the proposed algorithm improves the regret bound and achieves results competitive with the state of the art. The theoretical proof is well organized and appears correct to this reviewer. However, the novelty of the proposed algorithm may be limited.
Metric Learning with Multiple Kernels
Metric learning has become a very active research field. Its most popular representative, Mahalanobis metric learning, can be seen as learning a linear transformation and then computing the Euclidean metric in the transformed space. Since a linear transformation might not always be appropriate for a given learning problem, kernelized versions of various metric learning algorithms exist. However, the problem then becomes finding the appropriate kernel function. Multiple kernel learning addresses this limitation by learning a linear combination of a number of predefined kernels; this approach can also be readily used in the context of multiple-source learning to fuse different data sources. Surprisingly, despite the extensive work on multiple kernel learning for SVMs, there has been no work on metric learning with multiple kernel learning. In this paper we fill this gap and present a general approach for metric learning with multiple kernel learning. Our approach can be instantiated with different metric learning algorithms, provided that they satisfy certain constraints. Experimental evidence suggests that our approach outperforms metric learning with an unweighted kernel combination and metric learning with cross-validation-based kernel selection.
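The basic ingredient here is a combined kernel k_mu(x, y) = sum_l mu_l k_l(x, y) with nonnegative weights mu_l, which is again a valid positive semidefinite kernel; metric learning then operates in its feature space. Below is a minimal sketch of the combination and of the squared distance it induces in the implicit feature space, assuming fixed illustrative weights rather than the learned weights of the paper's approach.

```python
import numpy as np

def rbf(X, Y, gamma):
    # Gaussian RBF kernel matrix between the rows of X and Y.
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def linear(X, Y, _):
    # Linear kernel; the third argument is ignored (kept for a uniform API).
    return X @ Y.T

def combined_kernel(X, Y, kernels, params, mu):
    # k_mu(x, y) = sum_l mu_l * k_l(x, y) with mu_l >= 0 is still a PSD kernel.
    return sum(m * k(X, Y, p) for k, p, m in zip(kernels, params, mu))

rng = np.random.default_rng(3)
X = rng.normal(size=(5, 4))
kernels, params = [rbf, rbf, linear], [0.1, 1.0, None]
mu = np.array([0.5, 0.3, 0.2])  # fixed nonnegative combination weights

K = combined_kernel(X, X, kernels, params, mu)
# Squared distance in the feature space of the combined kernel:
#   ||phi(x_i) - phi(x_j)||^2 = K_ii + K_jj - 2 K_ij.
d2 = np.diag(K)[:, None] + np.diag(K)[None, :] - 2 * K
print(np.round(d2, 3))
```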